As one of the prevalent methods for building automation systems, imitation learning (IL) has shown promising performance across a wide range of domains. However, despite considerable improvements in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose R2RISE, a model-agnostic explanation framework for IL models. R2RISE explains overall policy performance with respect to the frames in the demonstrations: it iteratively retrains the black-box IL model on randomly masked demonstrations and uses the environment return, the conventional evaluation outcome, as the coefficient for building an importance map. We also conduct experiments to investigate three major questions: whether frames are equally important, whether the importance map is effective, and how importance maps from different IL models relate to each other. The results show that R2RISE successfully distinguishes important frames in the demonstrations.
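A minimal sketch of the RISE-style loop described above, in Python. `train_policy` and `evaluate_return` are hypothetical stand-ins for the black-box IL trainer and the environment rollout, and per-frame masking with nonnegative returns is an assumption:

```python
import numpy as np

# Sketch of the R2RISE-style loop: mask demonstration frames at random,
# retrain the black-box IL policy, and accumulate return-weighted masks.
# `train_policy` and `evaluate_return` are hypothetical stand-ins.

def importance_map(demos, train_policy, evaluate_return,
                   n_iters=100, keep_prob=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_frames = len(demos)
    importance = np.zeros(n_frames)
    total = 0.0
    for _ in range(n_iters):
        mask = rng.random(n_frames) < keep_prob       # random frame mask
        policy = train_policy([d for d, m in zip(demos, mask) if m])
        ret = evaluate_return(policy)                 # environment return, assumed >= 0
        importance += ret * mask                      # return-weighted mask accumulation
        total += ret
    return importance / max(total, 1e-8)              # normalized importance map
```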
A storyboard is a roadmap for video creation: a shot-by-shot sequence of images that visualizes the key plots of a text synopsis. Creating video storyboards, however, remains challenging: it requires not only associating high-level texts with images but also long-term reasoning to keep transitions smooth across shots. In this paper, we propose a new task, Text synopsis to Video Storyboard (TeViS), which aims to retrieve an ordered sequence of images to visualize a text synopsis. We construct a MovieNet-TeViS benchmark based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes manually selected from the corresponding movies with both relevance and cinematic coherence in mind. We also present an encoder-decoder baseline for the task. The model uses a pretrained vision-and-language model to improve high-level text-image matching. To improve coherence across long-term shots, we further propose pre-training the decoder on large-scale movie frames without text. Experimental results demonstrate that our proposed model significantly outperforms other models at creating text-relevant and coherent storyboards. Nevertheless, a large gap to human performance remains, suggesting room for promising future work.
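A minimal sketch of the retrieval-and-ordering problem described above: rank gallery shots against a synopsis embedding, then greedily chain the selected shots so adjacent ones are visually close. The embeddings and the greedy rule are illustrative assumptions, not the paper's encoder-decoder:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve_storyboard(synopsis_emb, shot_embs, k=5):
    """Pick the k most text-relevant shots, then order them for coherence."""
    scores = [cosine(synopsis_emb, s) for s in shot_embs]
    top = list(np.argsort(scores)[-k:])               # k most relevant shots
    ordered = [top.pop()]                             # start from the best match
    while top:                                        # chain nearest visual neighbors
        last = shot_embs[ordered[-1]]
        nxt = max(top, key=lambda i: cosine(last, shot_embs[i]))
        top.remove(nxt)
        ordered.append(nxt)
    return ordered                                    # indices of ordered keyframes

# usage with random stand-in embeddings:
print(retrieve_storyboard(np.random.rand(64), np.random.rand(100, 64)))
```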
With the rapid deployment of graph neural network (GNN) techniques in a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component of predictive and trustworthy decision-making. It is therefore critical to explain why a GNN makes particular predictions before it can be trusted in many applications. Several GNN explainers have been proposed recently; however, they fail to generate accurate and faithful explanations. To mitigate these limitations, we propose GANExplainer, based on the Generative Adversarial Network (GAN) architecture. GANExplainer consists of a generator that creates explanations and a discriminator that assists the generator's development. We investigate the explanation accuracy of our model by comparing the performance of GANExplainer with other state-of-the-art methods. Our empirical results on synthetic datasets indicate that GANExplainer improves explanation accuracy by up to 35\% compared to its alternatives.
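A minimal sketch of the generator/discriminator interplay described above. The real GANExplainer operates on graphs; here both networks are plain MLPs over fixed-size edge-feature vectors, purely for illustration, and all shapes are hypothetical:

```python
import torch
import torch.nn as nn

n_edges = 64
gen = nn.Sequential(nn.Linear(n_edges, 128), nn.ReLU(),
                    nn.Linear(128, n_edges), nn.Sigmoid())   # edge-mask "explanation"
disc = nn.Sequential(nn.Linear(n_edges, 128), nn.ReLU(),
                     nn.Linear(128, 1))                      # real-vs-generated logit
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    graphs = torch.rand(32, n_edges)         # stand-in batch of graph encodings
    real_expl = torch.rand(32, n_edges)      # stand-in reference explanations
    fake_expl = gen(graphs)
    # Discriminator: distinguish reference explanations from generated ones.
    d_loss = bce(disc(real_expl), torch.ones(32, 1)) + \
             bce(disc(fake_expl.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: produce explanations that fool the discriminator.
    g_loss = bce(disc(fake_expl), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```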
Spatial-temporal (ST) graph modeling, such as traffic speed forecasting and taxi demand prediction, is an important task in deep learning. However, owing to the heterogeneous nature of ST data, the ST patterns of different graph nodes can vary greatly in modeling difficulty. We argue that unveiling nodes to the model in a meaningful order, from easy to complex, can improve performance over the traditional training procedure. The idea is rooted in curriculum learning, which suggests that models can be sensitive to noise and difficult samples in the early stages of training. In this paper, we propose ST-Curriculum Dropout, a novel and easy-to-implement strategy for spatial-temporal graph modeling. Specifically, we evaluate the learning difficulty of each node in a high-level feature space and drop the difficult ones so that the model only needs to handle fundamental ST relations at the beginning, before gradually moving to hard ones. Our strategy can be applied to any canonical deep learning architecture without extra trainable parameters, and extensive experiments on a wide range of datasets illustrate that, by controlling the difficulty of ST relations as training progresses, the model captures better representations of the data and thus generalizes better.
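A minimal sketch of the easy-to-hard node schedule described above. Difficulty here is the distance of each node's embedding from the mean embedding, which is a hypothetical proxy, not the paper's exact measure:

```python
import numpy as np

def curriculum_node_mask(node_emb, epoch, total_epochs, min_keep=0.5):
    """Return a boolean mask that exposes easy nodes first, hard ones later."""
    difficulty = np.linalg.norm(node_emb - node_emb.mean(axis=0), axis=1)
    keep_frac = min_keep + (1.0 - min_keep) * min(epoch / total_epochs, 1.0)
    k = int(keep_frac * len(difficulty))
    keep = np.argsort(difficulty)[:k]          # easiest nodes first
    mask = np.zeros(len(difficulty), dtype=bool)
    mask[keep] = True
    return mask                                # True = node visible to the model

# usage: early epochs see only the easiest half of the nodes
mask = curriculum_node_mask(np.random.rand(200, 32), epoch=3, total_epochs=50)
print(mask.sum(), "of", mask.size, "nodes kept")
```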
Transformer verification has drawn increasing attention in machine learning research and industry. It formally verifies the robustness of transformers against adversarial attacks, such as swapping words for their synonyms. However, the performance of transformer verification is still unsatisfactory due to its bound-centric computation, which differs significantly from that of standard neural networks. In this paper, we propose Faith, an efficient framework for transformer verification on GPUs. We first propose a semantics-aware computation graph transformation to identify semantic information, such as the bound computations in transformer verification. We exploit such semantic information to enable efficient kernel fusion at the computation graph level. Second, we propose a verification-specialized kernel crafter to efficiently map transformer verification onto modern GPUs. The crafter exploits a set of GPU hardware features to accelerate verification-specialized operations, which are usually memory-intensive. Third, we propose an expert-guided autotuning that incorporates expert knowledge of GPU backends to facilitate exploration of the large search space. Extensive evaluations show that Faith achieves a $2.1\times$ to $3.4\times$ ($2.6\times$ on average) speedup over state-of-the-art frameworks.
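As background, a minimal sketch of the kind of bound-centric computation that verification performs: interval bound propagation through a linear layer. This illustrates why the workload differs from standard inference; it is not Faith's GPU implementation, which fuses and tunes such kernels on the device:

```python
import numpy as np

def linear_interval_bounds(lo, hi, W, b):
    """Propagate elementwise input bounds [lo, hi] through y = x @ W + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    y_center = center @ W + b
    y_radius = radius @ np.abs(W)      # worst-case deviation uses |W|
    return y_center - y_radius, y_center + y_radius

lo, hi = np.full(8, -0.1), np.full(8, 0.1)        # perturbed input region
W, b = np.random.randn(8, 4), np.zeros(4)
out_lo, out_hi = linear_interval_bounds(lo, hi, W, b)
print(out_lo, out_hi)                              # guaranteed output bounds
```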
Large language models (LLMs) have unlocked new capabilities for task planning from human instructions. However, prior attempts to apply LLMs to real-world robotic tasks have been limited by a lack of grounding in the surrounding scene. In this paper, we develop NLMap, an open-vocabulary, queryable scene representation that addresses this problem. NLMap is a framework for gathering contextual information into an LLM planner, allowing it to see and query the objects available in the scene before generating context-conditioned plans. NLMap first builds a natural-language-queryable scene representation using a vision-language model (VLM). An LLM-based object-proposal module parses instructions and proposes the objects involved, which are used to query the scene representation for object availability and location. The LLM planner then plans with this information about the scene. NLMap allows robots to operate without a fixed list of objects or executable options, enabling real-robot operation that was not achievable with previous methods. Project website: https://nlmap-saycan.github.io
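A minimal sketch of the query flow described above: a natural-language queryable scene map backed by VLM embeddings. `embed_text` and `propose_objects` in the usage comment are hypothetical stand-ins for a CLIP-style vision-language model and the LLM object-proposal module:

```python
import numpy as np

class SceneMap:
    """Open-vocabulary scene representation: VLM region embeddings + positions."""

    def __init__(self):
        self.regions = []                    # (embedding, 3-D position) pairs

    def add(self, region_embedding, position):
        self.regions.append((region_embedding, position))

    def query(self, text_embedding, threshold=0.3):
        """Return positions of regions matching the text query above threshold."""
        hits = []
        for emb, pos in self.regions:
            sim = emb @ text_embedding / (
                np.linalg.norm(emb) * np.linalg.norm(text_embedding))
            if sim > threshold:
                hits.append((sim, pos))
        return sorted(hits, key=lambda h: h[0], reverse=True)

# usage with hypothetical helpers:
#   for obj in propose_objects("wipe the table"):      # LLM object proposals
#       print(obj, scene_map.query(embed_text(obj)))   # availability + location
```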
Self-modeling is the process by which an agent, such as an animal or machine, learns to create a predictive model of its own dynamics. Once captured, this self-model allows the agent to plan and evaluate a variety of potential behaviors internally, using the self-model instead of costly physical experimentation. Here, we quantify the benefit of such self-models as a function of robot complexity. We find an R^2 = 0.90 correlation between the number of degrees of freedom a robot possesses and the added value of its self-model compared with a direct-learning baseline. This result may help motivate self-modeling in increasingly complex robotic systems, and may shed light on the origins of self-models, and ultimately self-awareness, in animals and humans.
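A minimal sketch of the reported correlation measure: the coefficient of determination (R^2) of a linear fit between degrees of freedom and the self-model's added value. The numbers below are placeholders, not the paper's data:

```python
import numpy as np

dof = np.array([2, 4, 6, 8, 12])                     # hypothetical robot DOFs
benefit = np.array([0.05, 0.12, 0.21, 0.30, 0.48])   # hypothetical added value

slope, intercept = np.polyfit(dof, benefit, 1)       # least-squares linear fit
pred = slope * dof + intercept
ss_res = np.sum((benefit - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((benefit - benefit.mean()) ** 2)     # total sum of squares
print("R^2 =", 1 - ss_res / ss_tot)
```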
Compared with natural images, medical images are difficult to acquire and costly to label. As an unsupervised learning method, contrastive learning can exploit unlabeled medical images more effectively. In this paper, we use a Transformer-based contrastive learning method and innovate on the contrastive learning network through transfer learning. The resulting model is then transferred to the downstream parotid segmentation task, improving the performance of the parotid segmentation model on the test set. The improved model achieves a DSC of 89.60%, an MPA of 99.36%, an MIoU of 85.11%, and an HD of 2.98. All four metrics show significant improvement compared with using a supervised learning model as the pre-trained model for the parotid segmentation network. Moreover, we found that the contrastive learning model's improvement to the segmentation network lies mainly in the encoder; this paper therefore also attempts to build a contrastive learning network for the decoder and discusses the problems encountered in the process.
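A minimal sketch of the transfer step described above: reusing a contrastively pre-trained encoder as the encoder of a segmentation network. The architectures and the checkpoint path are hypothetical placeholders, not the paper's networks:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in encoder; in practice this would be the pre-trained backbone."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.features(x)

class SegNet(nn.Module):
    """Segmentation net that wraps the transferred encoder."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder                   # weights transferred from pre-training
        self.decoder = nn.Conv2d(32, 2, 1)       # per-pixel foreground/background logits
    def forward(self, x):
        return self.decoder(self.encoder(x))

encoder = Encoder()
# encoder.load_state_dict(torch.load("contrastive_pretrained.pt"))  # hypothetical checkpoint
seg = SegNet(encoder)
print(seg(torch.rand(1, 1, 64, 64)).shape)       # -> torch.Size([1, 2, 64, 64])
```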
In this paper, we present the Multi-Forgery Detection Challenge, held concurrently with an IEEE Computer Society workshop at CVPR 2022. Our challenge aims at detecting automatic image manipulations, including but not limited to image editing, image synthesis, image generation, image photoshopping, and so on. The challenge attracted 674 teams from around the world, with about 2,000 valid submissions. We invited the top ten teams to present their solutions, and three teams received awards in the grand finale. In this paper, we present the solutions of the top three teams to advance research on image forgery detection.
Inertial measurement units (IMUs) are ubiquitous in robotics research, providing robots with pose information for balance and navigation. However, humans and animals can perceive the movement of their bodies in the environment without precise orientation or position values, through interaction that inherently involves a fast feedback loop between perception and action. This work presents an end-to-end approach that uses high-dimensional visual observations and action commands to train a visual self-model for legged locomotion. The visual self-model learns the spatial relationship between the robot's body movement and changes in ground texture from image sequences. We demonstrate that the robot can leverage the visual self-model to accomplish various locomotion tasks in real-world environments unseen during training. With our proposed method, robots can locomote without an IMU, or in environments without GPS or with weak geomagnetic fields, such as indoors and in urban canyons.
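A minimal sketch of the idea described above: a visual self-model that maps a ground-texture frame plus an action command to predicted body motion. The shapes, the CNN/MLP split, and the (dx, dy, dθ) output are hypothetical placeholders:

```python
import torch
import torch.nn as nn

class VisualSelfModel(nn.Module):
    def __init__(self, action_dim=12):
        super().__init__()
        self.vision = nn.Sequential(             # encode the ground-texture frame
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten())
        self.head = nn.Sequential(               # fuse image features with the action
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, 3))                   # predicted body motion (dx, dy, dθ)

    def forward(self, frame, action):
        z = self.vision(frame)
        return self.head(torch.cat([z, action], dim=1))

model = VisualSelfModel()
pred = model(torch.rand(4, 3, 64, 64), torch.rand(4, 12))
print(pred.shape)                                # -> torch.Size([4, 3])
```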